Day 21: today we build an AI-based system that analyzes zebrafish trajectory videos to determine whether the fish influence one another, and to analyze their social behavior and mutual learning. The LSTM code is below.
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout, Concatenate, Attention, LayerNormalization
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Assume we already have multimodal zebrafish trajectory data
# inputs.shape = (samples, timesteps, features_per_channel)
# labels.shape = (samples, num_classes)
# Generate fake data to illustrate the pipeline
num_samples = 1000
timesteps = 30
num_channels = 3 # e.g. distance, relative angle, speed change
num_features_per_channel = 5
num_classes = 3 # e.g. mutual influence, social behavior, mutual learning
inputs = [np.random.rand(num_samples, timesteps, num_features_per_channel) for _ in range(num_channels)]
labels = np.random.randint(0, num_classes, num_samples)
labels = tf.keras.utils.to_categorical(labels, num_classes)
# Split the data into training and test sets
# (using the same random_state keeps the per-channel splits aligned with the label split)
X_train = [train_test_split(channel, test_size=0.2, random_state=42)[0] for channel in inputs]
X_test = [train_test_split(channel, test_size=0.2, random_state=42)[1] for channel in inputs]
y_train, y_test = train_test_split(labels, test_size=0.2, random_state=42)
# Standardize the features (fit scalers on the training data only, then apply them to the test data)
scalers = [StandardScaler() for _ in range(num_channels)]
X_train = [scaler.fit_transform(channel.reshape(-1, num_features_per_channel)).reshape(-1, timesteps, num_features_per_channel) for scaler, channel in zip(scalers, X_train)]
X_test = [scaler.transform(channel.reshape(-1, num_features_per_channel)).reshape(-1, timesteps, num_features_per_channel) for scaler, channel in zip(scalers, X_test)]
# Build the multimodal LSTM model
# (note: this rebinds the name `inputs` to Keras Input layers; the raw arrays are no longer needed here)
inputs = [Input(shape=(timesteps, num_features_per_channel)) for _ in range(num_channels)]
lstm_outs = [LSTM(64, return_sequences=True)(input_layer) for input_layer in inputs]
# Add a self-attention mechanism over each LSTM output
attention_outs = []
for lstm_out in lstm_outs:
    attention = Attention()([lstm_out, lstm_out])
    attention_out = LayerNormalization()(attention)
    attention_outs.append(attention_out)
# Merge the multimodal branches
merged = Concatenate()(attention_outs)
x = LSTM(128)(merged)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(num_classes, activation='softmax')(x)
# Build the model
model = Model(inputs=inputs, outputs=output)
# Compile the model
model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))
# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test loss: {loss:.4f}, test accuracy: {accuracy:.4f}')
# Predict on new data
new_data = [np.random.rand(1, timesteps, num_features_per_channel) for _ in range(num_channels)]
prediction = model.predict(new_data)
predicted_class = np.argmax(prediction, axis=1)
print(f'Predicted behavior class: {predicted_class}')
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout, Concatenate, Attention, LayerNormalization
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
This part imports the necessary libraries: numpy for array manipulation, tensorflow for building the deep-learning model, and sklearn for data preprocessing and splitting.
num_samples = 1000
timesteps = 30
num_channels = 3 # e.g. distance, relative angle, speed change
num_features_per_channel = 5
num_classes = 3 # e.g. mutual influence, social behavior, mutual learning
inputs = [np.random.rand(num_samples, timesteps, num_features_per_channel) for _ in range(num_channels)]
labels = np.random.randint(0, num_classes, num_samples)
labels = tf.keras.utils.to_categorical(labels, num_classes)
This code generates fake data that simulates multimodal (multi-channel) zebrafish behavior data. Each input channel (for example distance, relative angle, or speed change) carries num_features_per_channel features, captured over timesteps time steps. labels holds the classification labels, divided into three behavior classes: mutual influence, social behavior, and mutual learning.
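In a real pipeline these channels would be derived from the tracked per-frame (x, y) positions of the fish rather than from random numbers. Below is a minimal sketch of that feature extraction, assuming a tracker has already produced position arrays for two fish; the function and variable names are hypothetical, and each channel here carries a single feature for brevity (a real pipeline would add more per-channel features to match num_features_per_channel):

import numpy as np

def build_channels(pos_a, pos_b, timesteps=30):
    """pos_a, pos_b: (frames, 2) arrays of per-frame (x, y) positions of two fish.
    Returns a list of per-channel arrays shaped (windows, timesteps, 1)."""
    diff = pos_b - pos_a
    distance = np.linalg.norm(diff, axis=1)                  # inter-fish distance per frame
    rel_angle = np.arctan2(diff[:, 1], diff[:, 0])           # bearing from fish A to fish B
    speed = np.linalg.norm(np.diff(pos_a, axis=0), axis=1)   # frame-to-frame speed of fish A
    speed_change = np.diff(speed, prepend=speed[0])          # rough acceleration proxy
    n = min(len(distance), len(speed_change))                # np.diff shortens arrays by one
    channels = [distance[:n], rel_angle[:n], speed_change[:n]]
    n_windows = n // timesteps
    return [c[:n_windows * timesteps].reshape(n_windows, timesteps, 1) for c in channels]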
X_train = [train_test_split(channel, test_size=0.2, random_state=42)[0] for channel in inputs]
X_test = [train_test_split(channel, test_size=0.2, random_state=42)[1] for channel in inputs]
y_train, y_test = train_test_split(labels, test_size=0.2, random_state=42)
scalers = [StandardScaler() for _ in range(num_channels)]
X_train = [scaler.fit_transform(channel.reshape(-1, num_features_per_channel)).reshape(-1, timesteps, num_features_per_channel) for scaler, channel in zip(scalers, X_train)]
X_test = [scaler.transform(channel.reshape(-1, num_features_per_channel)).reshape(-1, timesteps, num_features_per_channel) for scaler, channel in zip(scalers, X_test)]
Here we split the data into training and test sets (80% training, 20% test). Each channel is then standardized, with the scaler fitted on the training data only and re-used on the test data, so that every feature sits on the same scale and the model trains more reliably.
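Calling train_test_split once per channel only stays consistent because the random_state is identical every time; a more explicit alternative, sketched here under the assumption that inputs still refers to the raw per-channel arrays, is to split the sample indices once and reuse them:

indices = np.arange(num_samples)
idx_train, idx_test = train_test_split(indices, test_size=0.2, random_state=42)
X_train = [channel[idx_train] for channel in inputs]   # same rows selected in every channel
X_test = [channel[idx_test] for channel in inputs]
y_train, y_test = labels[idx_train], labels[idx_test]  # labels follow the same index split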
inputs = [Input(shape=(timesteps, num_features_per_channel)) for _ in range(num_channels)]
lstm_outs = [LSTM(64, return_sequences=True)(input_layer) for input_layer in inputs]
This is the input stage of the model. We create one Input layer per channel, with shape (timesteps, num_features_per_channel), i.e. the number of time steps and the number of features per channel. Each input channel is processed by its own LSTM layer with 64 units, with return_sequences=True so that the layer returns an output at every time step and the full temporal information is preserved.
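As a quick sanity check on the shapes involved (a small sketch, not part of the original script; branch_in and branch_out are hypothetical names):

branch_in = Input(shape=(timesteps, num_features_per_channel))   # (None, 30, 5)
branch_out = LSTM(64, return_sequences=True)(branch_in)
print(branch_out.shape)   # (None, 30, 64): one 64-dimensional vector per time step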
attention_outs = []
for lstm_out in lstm_outs:
    attention = Attention()([lstm_out, lstm_out])
    attention_out = LayerNormalization()(attention)
    attention_outs.append(attention_out)
Here an attention mechanism is used to emphasize the most informative time steps. The Attention layer, applied with the LSTM output as both query and value (self-attention), computes a weighted sum over time steps for every time step, highlighting the important moments, and LayerNormalization then normalizes the result to stabilize training. Each input channel goes through this step, and the results are collected in the attention_outs list.
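Conceptually, Attention()([lstm_out, lstm_out]) is dot-product self-attention. A rough NumPy sketch of what it computes for a single sequence, ignoring batching and any optional scaling Keras may apply:

def self_attention(h):
    """h: (timesteps, units) LSTM outputs; returns a re-weighted (timesteps, units) sequence."""
    scores = h @ h.T                                       # similarity between every pair of time steps
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)          # softmax over time steps
    return weights @ h                                     # weighted sum of the value vectors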
merged = Concatenate()(attention_outs)
x = LSTM(128)(merged)
x = Dropout(0.5)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.5)(x)
output = Dense(num_classes, activation='softmax')(x)
This code concatenates (Concatenate) all of the attention-processed channel outputs and passes them through a second LSTM layer, this time with 128 units, in order to capture higher-order dependencies across the input channels. Dropout layers are then used to reduce overfitting, a Dense layer maps the features down to a smaller dimension (64 units), and a final softmax layer produces the classification output.
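Because every branch keeps return_sequences=True, the concatenation happens along the feature axis, so the second LSTM still sees one vector per time step. A quick way to verify the wiring once the layers above have been defined (a sketch, assuming the same variable names):

print(merged.shape)   # (None, 30, 192): 3 channels x 64 LSTM units per time step
print(output.shape)   # (None, 3): one probability per behavior class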
model = Model(inputs=inputs, outputs=output)
model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test))
This code builds the model by connecting the input layers to the output layer, compiles it with the Adam optimizer and the categorical_crossentropy loss, and then trains it on the training set. Note that the test set doubles as the validation set here; with more data a separate validation split would normally be held out.
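With the random labels used here, accuracy will hover around chance. On real data it is also worth stopping training once the validation loss stops improving; a sketch using a standard Keras callback:

from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(X_train, y_train, epochs=50, batch_size=32,
          validation_data=(X_test, y_test), callbacks=[early_stop])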
loss, accuracy = model.evaluate(X_test, y_test)
print(f'Test loss: {loss:.4f}, test accuracy: {accuracy:.4f}')
new_data = [np.random.rand(1, timesteps, num_features_per_channel) for _ in range(num_channels)]
prediction = model.predict(new_data)
predicted_class = np.argmax(prediction, axis=1)
print(f'Predicted behavior class: {predicted_class}')
Finally, we evaluate the model on the test set and print the loss and accuracy. We then generate new fake data, run it through the model, and print the predicted behavior class.
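In practice the integer class index is more useful once mapped back to a behavior name; a small sketch, where the label order is an assumption and must match how the labels were actually encoded:

class_names = ['mutual influence', 'social behavior', 'mutual learning']   # assumed encoding order
prediction = model.predict(new_data)
predicted_class = int(np.argmax(prediction, axis=1)[0])
print(f'Predicted behavior: {class_names[predicted_class]} (p={prediction[0, predicted_class]:.2f})')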
This system analyzes zebrafish trajectory data with stacked LSTM layers and an attention mechanism, capturing the complex interactions between multimodal features and predicting the behavior class of the fish. The structure is well suited to multimodal, time-series problems and can be extended to other, similar behavior-analysis applications.